    Data Provenance Inference in Logic Programming: Reducing Effort of Instance-driven Debugging

    Data provenance allows scientists in different domains to validate their models and algorithms and to uncover anomalies and unexpected behaviors. In previous work, we described on-the-fly interpretation of (Python) scripts to build a workflow provenance graph automatically and then infer fine-grained provenance information based on that graph and the availability of data. To broaden the scope of our approach and demonstrate its viability, in this paper we extend it beyond procedural languages to purely declarative languages such as logic programming under the stable model semantics. For experiments and validation, we use the Answer Set Programming solver oClingo, which makes it possible to formulate and solve stream reasoning problems in a purely declarative fashion. We demonstrate that the benefits of provenance inference over explicit provenance still hold in a declarative setting, and we briefly discuss the potential impact for declarative programming, in particular for instance-driven debugging of the model in declarative problem solving.
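
    To make the idea concrete, the following is a minimal, hypothetical sketch (not the authors' system) of on-the-fly provenance capture: each processing step is wrapped so that its inputs and output become edges in a provenance graph, which can later be walked backwards to explain an anomalous result. All names here (ProvenanceGraph, traced) are illustrative.

        from collections import defaultdict

        class ProvenanceGraph:
            def __init__(self):
                # output id -> list of (step name, input ids) derivations
                self.edges = defaultdict(list)

            def record(self, step_name, input_ids, output_id):
                self.edges[output_id].append((step_name, list(input_ids)))

            def explain(self, output_id):
                # Walk backwards from an output to every contributing input.
                lineage, frontier = [], [output_id]
                while frontier:
                    item = frontier.pop()
                    for step, inputs in self.edges.get(item, []):
                        lineage.append((item, step, inputs))
                        frontier.extend(inputs)
                return lineage

        graph = ProvenanceGraph()

        def traced(step_name, fn, *inputs):
            # Run fn and log, on the fly, which inputs its output came from.
            output = fn(*[value for _, value in inputs])
            output_id = f"{step_name}:{id(output)}"
            graph.record(step_name, [name for name, _ in inputs], output_id)
            return output_id, output

        sorted_id, data = traced("sort", sorted, ("sensor.csv", [3, 1, 2]))
        print(graph.explain(sorted_id))   # lineage of the sorted output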

    Injecting knowledge into deep neural networks

    Much of the recent hype around artificial intelligence stems from advances in Neural Networks, currently the most widely used algorithm, which has succeeded where other approaches failed for decades. Neural Networks today can be trained on large amounts of data to perform hard tasks such as recognising objects in an image or translating languages. The process they use to perform these tasks amounts to a complex pattern recognition procedure which uses some clever mathematics to expose the underlying structure in a body of data. Humans think in a more conceptual way. We build a mental model of our world. We have the ability to extract relationships such as causality between elements involved in learning to perform a task, and the ability to use background knowledge when learning. One of the key challenges in making artificial intelligence more human-like is incorporating these properties of natural learning into the neural network paradigm. Designing a system that could utilise background knowledge when learning a new task would enable networks to be trained on much less data, opening up a new world of opportunities for Neural Networks to be applied to tasks which were previously not feasible due to the scarcity of data. In identifying these challenges, we have been inspired by recent seminal papers within the Deep Learning community, which call for new approaches to enhance deep representations with (common-sense) background knowledge. This is considered a key enabler for significantly improving the ability of machines to learn new tasks faster and in a domain-invariant way. The main practical challenges involved in this research are finding how best to extract and format relevant knowledge from a trained network, and how best to inject this knowledge into an untrained network.

    Defect Classification in Additive Manufacturing Using CNN-Based Vision Processing

    The development of computer vision and in-situ monitoring using visual sensors allows the collection of large datasets from the additive manufacturing (AM) process. Such datasets could be used with machine learning techniques to improve the quality of AM. This paper examines two scenarios: first, using convolutional neural networks (CNNs) to accurately classify defects in an image dataset from AM and, second, applying active learning techniques to the developed classification model. This allows the construction of a human-in-the-loop mechanism to reduce the size of the dataset required for training and to generate training data.
    Comment: 4 pages, accepted at the Irish Machine Vision and Image Processing Conference (IMVIP), Galway, August 2023
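
    As a rough illustration of the first scenario, the sketch below fine-tunes a pretrained CNN for defect classification; the ResNet-18 backbone, the four defect classes, and all hyperparameters are assumptions for the example, not details taken from the paper.

        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_DEFECT_CLASSES = 4   # hypothetical label set, e.g. porosity/crack/warp/ok

        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for param in model.parameters():
            param.requires_grad = False              # freeze pretrained features
        model.fc = nn.Linear(model.fc.in_features, NUM_DEFECT_CLASSES)

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        def train_step(images, labels):
            # One fine-tuning step on a batch of AM process images.
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            return loss.item()

        # Smoke test with random tensors standing in for real AM images.
        print(train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 4, (8,))))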

    An adaptive human-in-the-loop approach to emission detection of Additive Manufacturing processes and active learning with computer vision

    Recent developments in in-situ monitoring and process control in Additive Manufacturing (AM), also known as 3D-printing, allow the collection of large amounts of emission data during the build process of the parts being manufactured. This data can be used as input into 3D and 2D representations of the 3D-printed parts. However, the analysis, use, and characterization of this data still remain a manual process. The aim of this paper is to propose an adaptive human-in-the-loop approach using Machine Learning techniques that automatically inspects and annotates the emissions data generated during the AM process. More specifically, this paper looks at two scenarios: first, using convolutional neural networks (CNNs) to automatically inspect and classify emission data collected by in-situ monitoring and, second, applying Active Learning techniques to the developed classification model to construct a human-in-the-loop mechanism that accelerates the labeling process of the emission data. The CNN-based approach relies on transfer learning and fine-tuning, which makes it applicable to other industrial image patterns. The adaptive nature of the approach is enabled by an uncertainty sampling strategy that automatically selects the samples to be presented to human experts for annotation.
    Comment: 7 pages, 9 figures, 1 table. Presented at The 6th IEEE Workshop on Human-in-the-Loop Methods and Future of Work in BigData (IEEE HMData 2022), December 2022
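
    A minimal sketch of the uncertainty-sampling step described above: the current model scores a pool of unlabeled emission images and the least-confident ones are queued for expert annotation. The selection rule shown (least confidence) is one common variant; the abstract does not specify which one the authors use.

        import torch
        import torch.nn.functional as F

        def least_confident(model, unlabeled_batch, k=16):
            # Select the k samples whose top predicted probability is lowest.
            model.eval()
            with torch.no_grad():
                probs = F.softmax(model(unlabeled_batch), dim=1)
            confidence, _ = probs.max(dim=1)           # top-class probability
            return confidence.argsort()[:k]            # lowest confidence first

        # Usage: pick the indices, send those images to the annotation tool,
        # add the new labels to the training set, retrain, and repeat.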

    Image Data Augmentation Approaches: A Comprehensive Survey and Future directions

    Deep learning (DL) algorithms have shown significant performance in various computer vision tasks. However, limited labelled data leads to overfitting, where network performance on unseen data is poor compared to performance on training data, which in turn limits improvement. To cope with this problem, various techniques have been proposed, such as dropout, normalization and advanced data augmentation. Among these, data augmentation, which aims to enlarge the dataset size by adding sample diversity, has been a hot topic in recent times. In this article, we focus on advanced data augmentation techniques. We provide a background of data augmentation, a novel and comprehensive taxonomy of the reviewed techniques, and the strengths and weaknesses (wherever possible) of each technique. We also provide comprehensive results of the data augmentation effect on three popular computer vision tasks: image classification, object detection and semantic segmentation. For reproducibility of results, we compiled the available codes of all data augmentation techniques. Finally, we discuss the challenges and difficulties, and possible future directions for the research community. We believe this survey provides several benefits: (i) readers will understand the data augmentation working mechanism for fixing overfitting problems; (ii) the results will save researchers time when making comparisons; (iii) codes of the discussed data augmentation techniques are available at https://github.com/kmr2017/Advanced-Data-augmentation-codes; (iv) future work will spark interest in the research community.
    Comment: We need to make a lot of changes to make its quality better
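
    For readers unfamiliar with the mechanism, a typical "basic" augmentation pipeline of the kind the survey reviews looks like the sketch below; the chosen transforms and parameters are arbitrary examples, not recommendations from the article.

        from torchvision import transforms

        augment = transforms.Compose([
            transforms.RandomHorizontalFlip(p=0.5),
            transforms.RandomRotation(degrees=15),
            transforms.ColorJitter(brightness=0.2, contrast=0.2),
            transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
            transforms.ToTensor(),
        ])

    Because the transforms are applied on the fly, every epoch sees a slightly different version of each image, enlarging the effective dataset without collecting new labels.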

    AudRandAug: Random Image Augmentations for Audio Classification

    Data augmentation has proven to be effective in training neural networks. Recently, a method called RandAug was proposed, randomly selecting data augmentation techniques from a predefined search space. RandAug has demonstrated significant performance improvements for image-related tasks while imposing minimal computational overhead. However, no prior research has explored the application of RandAug to audio data augmentation, where audio is converted into an image-like pattern. To address this gap, we introduce AudRandAug, an adaptation of RandAug for audio data. AudRandAug selects data augmentation policies from a dedicated audio search space. To evaluate its effectiveness, we conducted experiments using various models and datasets. Our findings indicate that AudRandAug outperforms other existing data augmentation methods in terms of accuracy.
    Comment: Paper accepted at the 25th Irish Machine Vision and Image Processing Conference
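
    The core RandAug-style selection step can be sketched as follows: sample a few operations uniformly from a search space and chain them over a spectrogram treated as an image. The three operations below are generic stand-ins; the paper's dedicated audio search space is not reproduced here.

        import random
        import numpy as np

        def time_mask(spec, width=10):
            s = spec.copy()
            t = random.randrange(max(1, s.shape[1] - width))
            s[:, t:t + width] = 0.0
            return s

        def freq_mask(spec, width=8):
            s = spec.copy()
            f = random.randrange(max(1, s.shape[0] - width))
            s[f:f + width, :] = 0.0
            return s

        def add_noise(spec, scale=0.05):
            return spec + np.random.randn(*spec.shape) * scale

        SEARCH_SPACE = [time_mask, freq_mask, add_noise]

        def rand_augment(spec, n_ops=2):
            # Uniformly pick n_ops operations (with replacement) and chain them.
            for op in random.choices(SEARCH_SPACE, k=n_ops):
                spec = op(spec)
            return spec

        spectrogram = np.random.rand(64, 128)    # stand-in mel spectrogram
        augmented = rand_augment(spectrogram)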

    A Logic Programming approach for Access Control over RDF

    The Resource Description Framework (RDF) is an interoperable data representation format suitable for the interchange and integration of data, especially in Open Data contexts. However, RDF is also becoming increasingly attractive in scenarios involving sensitive data, where data protection is a major concern. At its core, RDF does not support any form of access control, and current proposals for extending RDF with access control do not fit well with the RDF representation model. Considering an enterprise scenario, we present a modelling that caters for access control over the stored RDF data in an intuitive and transparent manner. In this paper we rely on Annotated RDF, which introduces concepts from Annotated Logic Programming into RDF. Based on this model of the access control annotation domain, we propose a mechanism to manage permissions via application-specific logic rules. Furthermore, we illustrate how our Annotated Query Language (AnQL) provides a secure way to query this access-control-annotated RDF data.
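
    The annotation idea can be illustrated with a toy example: each triple carries an annotation naming the roles allowed to read it, and a simple rule propagates permissions down a role hierarchy. This plain-Python sketch only mimics the flavour of the approach; it is not AnQL or the authors' Annotated RDF machinery, and all data and role names are invented.

        TRIPLES = [
            # (subject, predicate, object, roles annotated as allowed to read)
            ("ex:alice", "ex:salary", "50000", {"hr", "admin"}),
            ("ex:alice", "ex:office", "B2",    {"hr", "admin", "staff"}),
        ]

        # Rule: a role inherits the permissions of the roles beneath it.
        ROLE_HIERARCHY = {"admin": {"hr"}, "hr": {"staff"}}

        def effective_roles(role):
            roles, frontier = {role}, [role]
            while frontier:
                for sub in ROLE_HIERARCHY.get(frontier.pop(), set()):
                    if sub not in roles:
                        roles.add(sub)
                        frontier.append(sub)
            return roles

        def query(role, predicate):
            # Return only the triples whose annotation permits this role.
            mine = effective_roles(role)
            return [(s, p, o) for s, p, o, allowed in TRIPLES
                    if p == predicate and allowed & mine]

        print(query("staff", "ex:salary"))   # [] -- the annotation denies access
        print(query("admin", "ex:salary"))   # the salary triple is visible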

    Toxic Effects on Thyroid Gland of Male Adult Lizards (Podarcis Siculus) in Contact with PolyChlorinated Biphenyls (PCBs)-Contaminated Soil

    Skin exposure is considered a potentially significant but little-studied pathway for PolyChlorinated Biphenyl (PCB) uptake in terrestrial reptiles. In this study, a native Italian lizard, Podarcis siculus, was exposed to PCB-contaminated soil for 120 days. Tissue distribution of PCBs, thyroid hormone levels, and thyroid histo-physiopathology were examined. The accumulation of PCBs in skin, plasma, liver, kidney, and brain was highest at 120 days. The alteration of triiodothyronine (T3) and thyroxine (T4) levels at different PCB concentrations and exposure times was accompanied by changes in the hormones involved in the hypothalamus-pituitary-thyroid (HPT) axis, namely Thyrotropin Releasing Hormone (TRH) and Thyroid Stimulating Hormone (TSH). Moreover, hepatic levels of deiodinase II (5'ORDII) and the content of T3 were positively correlated with PCB exposure. These results indicate that in lizards, PCB exposure through the skin has the potential to disrupt the thyroid endocrine system. Overall, the observed results indicate that PCBs could be associated with changes in thyroid homeostasis in these reptiles, through direct interactions with the metabolism of T4 and T3 via the HPT axis or through indirect interactions with peripheral deiodination.

    An experimental comparison of knowledge transfer algorithms in deep neural networks

    Neural knowledge transfer methods aim to constrain the hidden representation of one neural network to be similar, or to have similar properties, to that of another by applying specially designed loss functions between the two networks' hidden layers. In this way the intangible knowledge encoded in a network's weights is transferred without having to replicate exact weight structures or alter the knowledge representation from its natural, highly distributed form. Motivated by the need to enable greater transparency in evaluating such methods by bridging the gap between the different experimental setups in the existing literature, the need to compare each method against a greater number of its peers, and a desire to explore novel combinations of existing methods, we conduct an experimental comparison of eight contemporary neural knowledge transfer algorithms and further explore the performance of some combinations. We conduct our experiments on an image classification task and measure relative performance gains over non-knowledge-enhanced baseline neural networks in terms of classification accuracy. We observed (i) some interesting contradictions between our results and those reported in the original papers, (ii) a general lack of correlation between a given method's standalone performance and its performance when used in combination with knowledge distillation, (iii) a general trend of older, simpler methods outperforming newer ones, and (iv) Contrastive Representation Distillation (CRD) achieving the best performance.
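
    For context, the knowledge-distillation component that the compared methods are paired with can be written in a few lines: the student matches the teacher's temperature-softened outputs alongside the usual hard-label loss. The temperature and weighting below are conventional defaults, not values from the paper.

        import torch
        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, labels,
                              temperature=4.0, alpha=0.9):
            # Soft targets: KL divergence between temperature-softened
            # distributions, scaled by T^2 to keep gradient magnitudes comparable.
            soft = F.kl_div(
                F.log_softmax(student_logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean",
            ) * temperature ** 2
            # Hard targets: ordinary cross-entropy against the true labels.
            hard = F.cross_entropy(student_logits, labels)
            return alpha * soft + (1 - alpha) * hard

        # Smoke test with random logits standing in for real model outputs.
        loss = distillation_loss(torch.randn(8, 10), torch.randn(8, 10),
                                 torch.randint(0, 10, (8,)))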
